Bias-variance tradeoff as darts

But the game of darts is more complicated

Two types of tradeoffs

  1. Explicit: Is some bias worth the increase in precision?

  2. Implicit: Improving precision without sacrificing unbiasedness?

# Load the DeclareDesign package used throughout
library(DeclareDesign)

# Model
model = declare_model(N = 300, U = rnorm(N),
                      potential_outcomes(Y ~ 0.2 * Z + U))

# Inquiry
inquiry = declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0))

# Data strategy
assign = declare_assignment(Z = complete_ra(N, prob = 0.5))
measure = declare_measurement(Y = reveal_outcomes(Y ~ Z))

# Answer strategy
estimator = declare_estimator(Y ~ Z, inquiry = "ATE")

# Put research design together
rct = model + inquiry + assign + measure + estimator
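For readers who want to see the mechanics without DeclareDesign, the declared design can be sketched as a Monte Carlo in Python. This is illustrative only: the function name `simulate_rct` and the simulation count are my choices; the sample size, effect size, and complete random assignment mirror the declaration above.

```python
import numpy as np

def simulate_rct(n=300, ate=0.2, sims=2000, seed=1):
    """Monte Carlo version of the declared design: draw U, assign
    exactly half the units to treatment at random, and estimate the
    ATE by difference in means."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(sims):
        u = rng.normal(size=n)          # unobserved heterogeneity U
        y0, y1 = u, u + ate             # potential outcomes Y ~ 0.2*Z + U
        z = np.zeros(n, dtype=int)      # like complete_ra: n/2 treated
        z[rng.choice(n, size=n // 2, replace=False)] = 1
        y = np.where(z == 1, y1, y0)    # like reveal_outcomes(Y ~ Z)
        estimates.append(y[z == 1].mean() - y[z == 0].mean())
    return np.array(estimates)

est = simulate_rct()
print(est.mean())   # close to the true ATE of 0.2 (unbiasedness)
print(est.std())    # simulated standard error of the estimator
```

The mean of the simulated estimates recovers the true ATE, which is the unbiasedness property the explicit tradeoff asks whether to give up.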

Alternative experimental designs

| Randomization | Outcome: post-only | Outcome: pre-post           |
|---------------|--------------------|-----------------------------|
| Complete      | Standard           | Pre-post                    |
| Block         | Block randomized   | Block randomized & pre-post |
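To see why the pre-post column can buy precision, here is a hypothetical Python sketch. The baseline measure `y_pre` sharing the persistent unit effect U and the 0.3 noise scale are invented for illustration; the change-score estimator is compared with the post-only difference in means.

```python
import numpy as np

rng = np.random.default_rng(2)
n, ate, sims = 300, 0.2, 3000

post_only, pre_post = [], []
for _ in range(sims):
    u = rng.normal(size=n)                        # persistent unit effect
    y_pre = u + rng.normal(scale=0.3, size=n)     # hypothetical baseline
    z = np.zeros(n, dtype=int)                    # complete assignment
    z[rng.choice(n, size=n // 2, replace=False)] = 1
    y = ate * z + u + rng.normal(scale=0.3, size=n)
    # Standard design: difference in means on the post measure only
    post_only.append(y[z == 1].mean() - y[z == 0].mean())
    # Pre-post design: difference in means on the change score
    d = y - y_pre
    pre_post.append(d[z == 1].mean() - d[z == 0].mean())

print(np.std(post_only))   # larger SE: U stays in the outcome
print(np.std(pre_post))    # smaller SE: differencing absorbs U
```

Both estimators remain unbiased for the ATE; the pre-post design simply removes the persistent component U from the comparison, shrinking the standard error.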

Precision-retention tradeoff

Under complete random assignment of \(N/2\) units to treatment,

\[ SE(\widehat{ATE}_\text{Standard}) = \sqrt{\frac{\text{Var}(Y_i(0)) + \text{Var}(Y_i(1)) + 2\text{Cov}(Y_i(0), Y_i(1))}{N-1}} \]
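The formula can be sanity-checked by simulation. Under the model declared earlier the treatment effect is constant, so \(\text{Var}(Y_i(0)) = \text{Var}(Y_i(1)) = \text{Cov}(Y_i(0), Y_i(1)) = 1\); the sketch below (simulation settings are my choices) compares the analytic SE with the simulated SE of the difference-in-means estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, ate, sims = 300, 0.2, 4000

# Analytic SE: Var(Y0) = Var(Y1) = Cov(Y0, Y1) = 1 under the
# constant-effect model Y = 0.2*Z + U with U ~ N(0, 1)
se_analytic = np.sqrt((1 + 1 + 2 * 1) / (n - 1))

# Simulated SE of the difference-in-means estimator
ests = []
for _ in range(sims):
    u = rng.normal(size=n)
    z = np.zeros(n, dtype=int)
    z[rng.choice(n, size=n // 2, replace=False)] = 1
    y = ate * z + u
    ests.append(y[z == 1].mean() - y[z == 0].mean())

print(round(se_analytic, 3))   # about 0.116
print(round(np.std(ests), 3))  # close to the analytic value
```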
